[SPARK-51418][SQL] Fix DataSource PARTITIONED TABLE w/ Hive type incompatible partition columns #50182

Closed
yaooqinn wants to merge 5 commits into apache:master from yaooqinn:SPARK-51418

Conversation

@yaooqinn
Member

@yaooqinn yaooqinn commented Mar 6, 2025

What changes were proposed in this pull request?

```
25/03/06 08:25:17 WARN HiveExternalCatalog: Hive incompatible types found: timestamp_ntz. Persisting data source table `spark_catalog`.`default`.`c` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: InvalidObjectException(message:Invalid partition column type: timestamp_ntz)
```
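For context, a DDL statement along these lines would trigger the failure before this fix (a hypothetical reproduction; the table name `c` and the `TIMESTAMP_NTZ` partition column match the log above):

```sql
-- Hypothetical reproduction: before this fix, registering this partitioned
-- data source table with the Hive metastore failed with
-- InvalidObjectException(message:Invalid partition column type: timestamp_ntz).
CREATE TABLE c (id INT, ts TIMESTAMP_NTZ)
USING parquet
PARTITIONED BY (ts);
```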

The partition columns are duplicated: they are stored both in the HMS column metadata and in the table properties. If they contain Hive-incompatible data types, the HMS metadata API fails the operation.

We can rely on the table properties instead, for both reads and writes.
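As a rough illustration of this fallback, in plain Python (this is NOT Spark's actual `HiveExternalCatalog` code, and the property key below is only a stand-in for Spark's real schema table properties): keep the authoritative schema in Spark-specific table properties, and when a partition column's type is not Hive-compatible, stop registering typed partition columns through the HMS column metadata at all.

```python
# Minimal sketch of the fallback, assuming a simplified type whitelist;
# not Spark's actual implementation.

HIVE_COMPATIBLE_TYPES = {"int", "bigint", "string", "double", "timestamp", "date"}

def split_for_metastore(schema, partition_cols):
    """Decide what goes to HMS column metadata vs. table properties.

    schema: list of (name, type) pairs; partition_cols: partition column names.
    The authoritative schema always goes into Spark-specific table
    properties. If any partition column has a Hive-incompatible type,
    only the data columns are handed to HMS, so HMS never sees the
    incompatible partition column type.
    """
    props = {"spark.sql.sources.schema": repr(schema)}  # authoritative copy
    incompatible = [
        (name, typ) for name, typ in schema
        if name in partition_cols and typ not in HIVE_COMPATIBLE_TYPES
    ]
    if incompatible:
        # Spark SQL specific format: only data columns reach HMS; readers
        # recover the partition schema from the table properties.
        hms_cols = [(n, t) for n, t in schema if n not in partition_cols]
        return hms_cols, props
    return list(schema), props

schema = [("id", "int"), ("ts", "timestamp_ntz")]
hms_cols, props = split_for_metastore(schema, ["ts"])
```

With the incompatible `timestamp_ntz` partition column, only `id` is registered through HMS column metadata, while the full schema survives in the table properties; a fully Hive-compatible table keeps its columns in HMS as before.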

Why are the changes needed?

Bugfix; otherwise, newly added Spark data types cannot be used as partition columns.

Does this PR introduce any user-facing change?

Yes. More data types are now supported as partition columns for data source tables stored in HMS.

How was this patch tested?

New tests.

Was this patch authored or co-authored using generative AI tooling?

No.

@github-actions github-actions bot added the SQL label Mar 6, 2025
@yaooqinn yaooqinn marked this pull request as draft March 6, 2025 10:15
@yaooqinn yaooqinn marked this pull request as ready for review March 10, 2025 03:00
Member

@dongjoon-hyun dongjoon-hyun left a comment

+1, LGTM (Pending CIs). Thank you, @yaooqinn .

@yaooqinn
Member Author

Thank you @dongjoon-hyun

The CI has already passed (https://github.com/yaooqinn/spark/actions/runs/13718779026), but the status was not properly updated here.

Let's wait for the GA for the latest sync.

@yaooqinn yaooqinn closed this in f11bc75 Mar 10, 2025
yaooqinn added a commit that referenced this pull request Mar 10, 2025
…tible partition columns

Closes #50182 from yaooqinn/SPARK-51418.

Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>
(cherry picked from commit f11bc75)
Signed-off-by: Kent Yao <yao@apache.org>
@yaooqinn yaooqinn deleted the SPARK-51418 branch March 10, 2025 05:31
@yaooqinn
Copy link
Member Author

Merged to master and 4.0, thank you again @dongjoon-hyun

zifeif2 pushed a commit to zifeif2/spark that referenced this pull request Nov 14, 2025
…tible partition columns

Closes apache#50182 from yaooqinn/SPARK-51418.

Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>
(cherry picked from commit d55a8ba)
Signed-off-by: Kent Yao <yao@apache.org>
